Regularized Risk Minimization by Nesterov's Accelerated Gradient Methods: Algorithmic Extensions and Empirical Studies

Abstract

Nesterov's accelerated gradient methods (AGM) have been successfully applied in many machine learning areas. However, their empirical performance on training max-margin models has been inferior to existing specialized solvers. In this paper, we first extend AGM to strongly convex and composite objective functions with Bregman-style prox-functions. Our unifying framework covers both the $\infty$-memory and 1-memory styles of AGM, tunes the Lipschitz constant adaptively, and bounds the duality gap. Then we demonstrate various ways to apply this framework of methods to a wide range of machine learning problems. Emphasis will be given to their rate of convergence and how to efficiently compute the gradient and optimize the models. The experimental results show that with our extensions AGM outperforms state-of-the-art solvers on max-margin models.
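
The abstract provides no code, but the core technique it names, accelerated proximal gradient descent on a composite objective with an adaptively tuned Lipschitz constant, can be illustrated concretely. The sketch below is a minimal 1-memory (FISTA-style) variant applied to an $\ell_1$-regularized least-squares problem; the function names (`agm_l1`, `soft_threshold`) and parameters (`L0`, `eta`) are illustrative assumptions, not the paper's implementation, which targets max-margin models and general Bregman prox-functions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (closed form for the l1 regularizer)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def agm_l1(A, b, lam, L0=1.0, eta=2.0, max_iter=200):
    """Sketch: accelerated proximal gradient for
    min_w 0.5*||A w - b||^2 + lam*||w||_1, with backtracking that grows the
    Lipschitz estimate L until the quadratic model majorizes the smooth part
    (one simple way to tune the Lipschitz constant adaptively)."""
    w = np.zeros(A.shape[1])   # current iterate
    z = w.copy()               # extrapolation (momentum) point
    t, L = 1.0, L0
    for _ in range(max_iter):
        r = A @ z - b
        fz = 0.5 * (r @ r)                 # smooth loss at z
        grad = A.T @ r                     # its gradient at z
        while True:                        # backtracking line search on L
            w_new = soft_threshold(z - grad / L, lam / L)
            d = w_new - z
            r_new = A @ w_new - b
            if 0.5 * (r_new @ r_new) <= fz + grad @ d + 0.5 * L * (d @ d):
                break                      # quadratic upper bound holds
            L *= eta
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = w_new + ((t - 1.0) / t_new) * (w_new - w)  # Nesterov extrapolation
        w, t = w_new, t_new
    return w
```

A quick smoke test on synthetic data, e.g. `agm_l1(np.random.randn(50, 20), np.random.randn(50), lam=0.1)`, should return a sparse solution; the $O(1/k^2)$ convergence rate and the duality-gap bound claimed in the abstract belong to the paper's framework and are not certified by this toy sketch.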